Several training strategies and temporal models have recently been proposed for isolated-word lip-reading in a series of independent works. However, the potential of combining the best strategies and investigating the impact of each of them has not been explored. In this paper, we systematically investigate the performance of state-of-the-art data augmentation methods, temporal models, and other training strategies, such as self-distillation and the use of word boundary indicators. Our results show that Time Masking (TM) is the most important augmentation, followed by mixup, and that Densely-Connected Temporal Convolutional Networks (DC-TCN) are the best temporal model for isolated-word lip-reading. Using self-distillation and word boundary indicators is also beneficial, but to a lesser extent. A combination of all the above methods results in a classification accuracy of 93.4%, which is an absolute improvement of 4.6% over the current state-of-the-art performance on the LRW dataset. The performance can be further improved to 94.1% by pre-training on additional datasets. An error analysis of the various training strategies reveals that the performance improves by increasing the classification accuracy of hard-to-recognise words.
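The time-masking augmentation highlighted above can be illustrated with a minimal sketch over a clip of video frames; the mask length, number of masks, and mean-filling choice below are illustrative assumptions, not the paper's exact hyper-parameters.

```python
import numpy as np

def time_mask(frames: np.ndarray, max_mask_len: int = 10, n_masks: int = 1) -> np.ndarray:
    """Mask random temporal spans of a video clip (a sketch of Time Masking).

    frames: array of shape (T, H, W) or (T, ...) -- a sequence of video frames.
    max_mask_len, n_masks: illustrative hyper-parameters (assumptions).
    """
    out = frames.copy()
    T = out.shape[0]
    for _ in range(n_masks):
        mask_len = np.random.randint(1, max_mask_len + 1)
        start = np.random.randint(0, max(1, T - mask_len + 1))
        # Replace the masked span with the clip mean, one common choice for time masking.
        out[start:start + mask_len] = out.mean(axis=0)
    return out
```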
This paper presents a novel method for face clustering in videos using a video-centralised transformer. Previous works often employed contrastive learning to learn frame-level representations and used average pooling to aggregate the features along the temporal dimension. This approach may not fully capture complicated video dynamics. In addition, despite recent progress in video-based contrastive learning, few works have attempted to learn a self-supervised, clustering-friendly face representation that benefits the video face clustering task. To overcome these limitations, our method employs a transformer to directly learn video-level representations that better reflect the temporally-varying properties of faces in videos, and we also propose a video-centralised self-supervised framework to train the transformer model. We additionally investigate face clustering in egocentric videos, a fast-emerging field that has not been studied in works related to face clustering. To this end, we present and release the first large-scale egocentric video face clustering dataset, named EasyCom-Clustering. We evaluate our proposed method on the widely used Big Bang Theory (BBT) dataset and the new EasyCom-Clustering dataset. Results show that the performance of our video-centralised transformer surpasses all previous state-of-the-art methods on both benchmarks, exhibiting a self-attentive understanding of face videos.
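As a rough illustration of the contrast drawn above, learning a video-level representation with a transformer rather than average-pooling frame features, the sketch below encodes a track of per-frame face embeddings and reads out a clip embedding from a learnable token. The module sizes and the CLS-token readout are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class VideoFaceEncoder(nn.Module):
    """Transformer over per-frame face embeddings -> one video-level embedding."""

    def __init__(self, dim: int = 256, heads: int = 4, layers: int = 2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learnable readout token (assumption)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, T, dim) per-frame face embeddings from any backbone.
        x = torch.cat([self.cls.expand(frame_feats.size(0), -1, -1), frame_feats], dim=1)
        x = self.encoder(x)
        return x[:, 0]  # video-level embedding used for clustering

feats = torch.randn(8, 16, 256)        # 8 face tracks, 16 frames each (toy input)
video_emb = VideoFaceEncoder()(feats)  # (8, 256)
```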
Dilated convolutions are widely used in deep semantic segmentation models because they can enlarge the receptive field of the filters without adding extra weights or sacrificing spatial resolution. However, as dilated convolutional filters have no positional knowledge of pixels on semantically meaningful contours, they may lead to ambiguous predictions on object boundaries. In addition, although a dilated filter can expand its receptive field, the total number of sampled pixels remains unchanged and usually covers only a small fraction of the receptive field's total area. Inspired by the lateral inhibition (LI) mechanism in the human visual system, we propose dilated convolutions with lateral inhibitions (LI-Convs) to overcome these limitations. Introducing the LI mechanism improves the convolutional filters' sensitivity to semantic object boundaries. Moreover, since LI-Convs also implicitly take pixels from the laterally inhibited zones into consideration, they can extract features at denser scales. By integrating LI-Convs into the DeepLabV3+ architecture, we propose the lateral-inhibited atrous spatial pyramid pooling (LI-ASPP), the lateral-inhibited MobileNet-V2 (LI-MNV2), and the lateral-inhibited ResNet (LI-ResNet). Experimental results on three benchmark datasets (PASCAL VOC 2012, CelebAMask-HQ, and ADE20K) show that our LI-based segmentation models outperform the baselines on all of them, verifying the effectiveness and generality of the proposed LI-Convs.
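To make the receptive-field argument above concrete, the snippet below compares a standard and a dilated 3x3 convolution: both have the same number of weights and preserve spatial resolution, but the dilated one spreads its nine taps over a larger span. This shows standard dilated-convolution behaviour only, not the paper's LI-Conv itself.

```python
import torch
import torch.nn as nn

std = nn.Conv2d(1, 1, kernel_size=3, padding=1, dilation=1, bias=False)
dil = nn.Conv2d(1, 1, kernel_size=3, padding=2, dilation=2, bias=False)

# Same parameter count: dilation spreads the 9 taps apart instead of adding weights.
print(sum(p.numel() for p in std.parameters()))  # 9
print(sum(p.numel() for p in dil.parameters()))  # 9

x = torch.randn(1, 1, 32, 32)
# Both keep the 32x32 spatial resolution; the dilated filter covers a 5x5 span
# with the same 9 sampled pixels, illustrating the fixed sampling budget noted above.
print(std(x).shape, dil(x).shape)
```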
Video anomaly detection has recently been formulated as a multiple-instance learning task under weak supervision, in which each video is treated as a bag of snippets to be determined whether it contains anomalies. Previous efforts mainly focus on the discrimination of the snippets themselves without modelling the temporal dynamics, i.e., the variation across adjacent snippets. We therefore propose a Discriminative Dynamics Learning (DDL) method with two objective functions, namely a dynamics ranking loss and a dynamics alignment loss. The former aims to enlarge the score-dynamics gap between positive and negative bags, while the latter performs temporal alignment between the feature dynamics and the score dynamics within a bag. Moreover, a Locality-aware Attention Network (LA-Net) is constructed to capture global correlations and re-calibrate location preferences across snippets, followed by a multilayer perceptron with causal convolution to obtain the anomaly scores. Experimental results show that our method achieves significant improvements on two challenging benchmarks, namely UCF-Crime and XD-Violence.
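A rough sketch of the ranking idea described above: if a bag's score "dynamics" is taken as the change between adjacent snippet scores, a hinge margin can push the positive-bag dynamics above the negative-bag dynamics. The margin value and the max-pooling of the dynamics are assumptions for illustration, not the paper's exact loss.

```python
import torch

def dynamics_ranking_loss(pos_scores: torch.Tensor,
                          neg_scores: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    """pos_scores / neg_scores: (T,) snippet anomaly scores of a positive / negative bag."""
    pos_dyn = (pos_scores[1:] - pos_scores[:-1]).abs().max()  # largest score change in the anomalous bag
    neg_dyn = (neg_scores[1:] - neg_scores[:-1]).abs().max()  # largest score change in the normal bag
    # Hinge ranking: positive-bag dynamics should exceed negative-bag dynamics by a margin.
    return torch.clamp(margin - pos_dyn + neg_dyn, min=0.0)

loss = dynamics_ranking_loss(torch.rand(32), torch.rand(32))
```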
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are two-fold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance the support features and then propose to link object queries for better calibration via cross-attention (a sketch of the first step follows below). After the above steps, performance on the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking results on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
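The mask-based re-weighting step can be sketched as masked average pooling of support features into a dynamic class center, followed by a channel-wise re-weighting of the query feature map. This is a minimal illustration under assumed tensor shapes and a sigmoid gating choice, not the RefT module itself.

```python
import torch

def reweight_query(support_feat: torch.Tensor,   # (C, H, W) support feature map
                   support_mask: torch.Tensor,   # (H, W) binary object mask
                   query_feat: torch.Tensor      # (C, H, W) query feature map
                   ) -> torch.Tensor:
    # Masked average pooling: a dynamic class center computed only over object pixels.
    m = support_mask.unsqueeze(0)                                         # (1, H, W)
    center = (support_feat * m).sum(dim=(1, 2)) / m.sum().clamp(min=1.0)  # (C,)
    # Re-weight query channels with a gate derived from the class center (sigmoid gating assumed).
    weights = torch.sigmoid(center).view(-1, 1, 1)                        # (C, 1, 1)
    return query_feat * weights

out = reweight_query(torch.randn(256, 32, 32),
                     (torch.rand(32, 32) > 0.5).float(),
                     torch.randn(256, 32, 32))
```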
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and can thus be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
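The distillation objective referred to above, in which the student is trained to mimic the teacher's output distribution, is usually written as a temperature-scaled KL term combined with the hard-label loss. The sketch below shows this standard KD loss only; RELIANT's fairness regularisation is not shown, and the temperature and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T: float = 2.0, alpha: float = 0.5):
    """Standard knowledge-distillation objective: soft teacher targets + hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = kd_loss(torch.randn(16, 7), torch.randn(16, 7), torch.randint(0, 7, (16,)))
```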
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO 2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models while trading off model accuracy and efficiency well.
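For reference, the MobileNetV2-style inverted residual block that the Meta-Mobile Block abstracts over looks roughly like the sketch below (standard expand / depthwise / project formulation; the iRMB additionally swaps in Transformer-like dynamic modelling, which is not shown here).

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual: expand -> depthwise -> linear projection."""

    def __init__(self, dim: int, expand: int = 4):
        super().__init__()
        hidden = dim * expand
        self.block = nn.Sequential(
            nn.Conv2d(dim, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),  # depthwise conv
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, dim, 1, bias=False), nn.BatchNorm2d(dim),          # linear projection
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection (stride 1, equal channels)

y = InvertedResidual(64)(torch.randn(2, 64, 56, 56))
```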
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hampering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. For MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
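The information-gain-based feature selection mentioned above can be sketched as ranking features by an information-gain estimate and keeping the top 20. The snippet uses mutual information as the gain estimate and toy data; both are assumptions for illustration, not the benchmark's exact procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def top_k_by_information_gain(X: np.ndarray, y: np.ndarray, k: int = 20) -> np.ndarray:
    """Indices of the k features with the highest information gain
    (estimated here via mutual information, as an approximation)."""
    gains = mutual_info_classif(X, y, random_state=0)
    return np.argsort(gains)[::-1][:k]

X = np.random.rand(500, 40)        # toy user property features (assumption)
y = np.random.randint(0, 2, 500)   # toy bot / human labels (assumption)
selected = top_k_by_information_gain(X, y)
```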